11 research outputs found

    Universal Composability is Secure Compilation

    Universal composability is a framework for the specification and analysis of cryptographic protocols with a strong compositionality guarantee: UC protocols remain secure even when composed with other protocols. Secure compilation asks whether compiled programs are as secure as their source-level counterparts, no matter what target-level code they interact with. These two disciplines have been studied in isolation, but we believe there is a deeper connection between them, with benefits to reap from both worlds. This paper outlines the connection between universal composability and robust compilation, the latest of the secure compilation theories. We show how to read the universal composability theorem as a robust compilation theorem and vice versa. This, in turn, shows which elements of one theory correspond to which elements of the other. We believe this is a first step towards understanding how secure compilation theories can be used in universal composability settings and vice versa.
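
    For orientation, one representative robust-compilation criterion from this line of work is robust trace-property preservation; the statement below is a sketch in my own notation, not a formula taken from the paper:

        \forall P.\ \forall C_T.\ \forall t.\quad C_T[P\!\downarrow] \rightsquigarrow t \;\Longrightarrow\; \exists C_S.\ C_S[P] \rightsquigarrow t

    Read: every trace t that a target-level context C_T can induce against the compiled program P↓ is already producible by some source-level context C_S against P. This "explain the attack at the source level" structure mirrors the UC requirement that every real-world adversary admits an ideal-world simulator, which is the correspondence the paper develops.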

    Specialized Proof of Confidential Knowledge (SPoCK)

    Flow is a high-throughput blockchain with a dedicated step for executing the transactions in a block and a subsequent verification step performed by Verification Nodes. To enforce the integrity of the blockchain, the protocol requires a component that prevents Verification Nodes from approving execution results without checking them. In our preceding work, we sketched an approach called Specialized Proof of Confidential Knowledge (SPoCK). Using SPoCK, nodes can provide evidence to a third party that they both executed the same transaction sequence, without revealing the resulting execution trace. The previous Flow white paper presented a basic implementation of such a scheme. In this note, we introduce a new SPoCK implementation that is more concise and more efficient than the previous proposal. We first provide a formal generic description of a SPoCK scheme as well as its security definition. We then propose a new construction of SPoCK based on the BLS signature scheme, and support it with a proof of security under appropriate computational assumptions.
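
    To illustrate how a BLS-based SPoCK can work, here is a sketch under my own assumptions (not necessarily the exact construction of this note): node i holds a BLS keypair (sk_i, pk_i = g_2^{sk_i}) and, for confidential data m, publishes the proof \sigma_i = H(m)^{sk_i} \in \mathbb{G}_1. A third party holding pk_A, pk_B, \sigma_A, \sigma_B accepts iff

        e(\sigma_A, pk_B) \;=\; e(\sigma_B, pk_A)

    Both sides equal e(H(m), g_2)^{sk_A \cdot sk_B} precisely when the two proofs were computed over the same m, so the verifier learns that the nodes agree on the data without learning the data itself.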

    Scaling Verifiable Computation Using Efficient Set Accumulators

    Verifiable outsourcing systems offload a large computation to a remote server, but require that the remote server provide a succinct proof, called a SNARK, that the computation was carried out correctly. Real-world applications of this approach can be found in several blockchain systems that employ verifiable outsourcing to process a large number of transactions off-chain. This reduces the on-chain work to simply verifying a succinct proof that transaction processing was done correctly. In practice, verifiable outsourcing of state updates is done by updating the leaves of a Merkle tree, recomputing the resulting Merkle root, and proving, using a SNARK, that the state update was done correctly. In this work, we use a combination of existing and novel techniques to implement an RSA accumulator inside of a SNARK, and use it as a replacement for a Merkle tree. We specifically optimize the accumulator for compatibility with SNARKs. Our experiments show that the resulting system reduces costs compared to existing approaches that use Merkle trees for committing to the current state. These results apply broadly to any system that needs to offload batches of state updates to an untrusted server.
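
    To make the accumulator idea concrete, here is a toy sketch of an RSA accumulator with a membership-witness check; the modulus, base, and elements are illustrative assumptions (real deployments use a roughly 2048-bit modulus of unknown factorization and hash element data to primes), and this is not the paper's implementation.

        # Toy RSA accumulator sketch; parameters are for illustration only.
        N = 3233           # demo modulus (61 * 53); in practice the factorization is unknown
        g = 3              # accumulator base
        elements = [5, 7, 11, 13]   # elements are assumed already mapped to primes

        # Accumulate: A = g^(5*7*11*13) mod N
        prod = 1
        for e in elements:
            prod *= e
        A = pow(g, prod, N)

        # Membership witness for 11: raise g to the product of the *other* primes
        w = pow(g, prod // 11, N)

        # Verify membership: w^11 == A (mod N)
        assert pow(w, 11, N) == A

    Batched updates follow the same pattern: inserting a set of new primes raises the accumulator to their product, which is the kind of exponentiation the paper verifies inside the SNARK.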

    Verifiable ASICs

    A manufacturer of custom hardware (ASICs) can undermine the intended execution of that hardware; high-assurance execution thus requires controlling the manufacturing chain. However, a trusted platform might be orders of magnitude worse in performance or price than an advanced, untrusted platform. This paper initiates exploration of an alternative: using verifiable computation (VC), an untrusted ASIC computes proofs of correct execution, which are verified by a trusted processor or ASIC. In contrast to the usual VC setup, here the prover and verifier together must impose less overhead than the alternative of executing directly on the trusted platform. We instantiate this approach by designing and implementing physically realizable, area-efficient, high-throughput ASICs (for a prover and verifier) in fully synthesizable Verilog. The system, called Zebra, is based on the CMT and Allspice interactive proof protocols; building it required new observations about CMT, careful hardware design, and attention to architectural challenges. For a class of real computations, Zebra meets or exceeds the performance of executing directly on the trusted platform.
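
    The overhead requirement can be written as a simple break-even condition; the notation below is mine and just restates the constraint described above:

        \mathrm{Cost}_{\mathrm{trusted}}(\text{verifier}) \;+\; \mathrm{Cost}_{\mathrm{untrusted}}(\text{prover}) \;<\; \mathrm{Cost}_{\mathrm{trusted}}(\text{native execution})

    Here "Cost" can be read as energy, area-time product, or price per computation, and, depending on the protocol, per-instance costs on the left are amortized over a batch of instances.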

    Doubly-efficient zkSNARKs without trusted setup

    We present a zero-knowledge argument for NP with low communication complexity, low concrete cost for both the prover and the verifier, and no trusted setup, based on standard cryptographic assumptions. Communication is proportional to $d \cdot \log G$ (for $d$ the depth and $G$ the width of the verifying circuit) plus the square root of the witness size. When applied to batched or data-parallel statements, the prover's runtime is linear and the verifier's is sub-linear in the verifying circuit size, both with good constants. In addition, witness-related communication can be reduced, at the cost of increased verifier runtime, by leveraging a new commitment scheme for multilinear polynomials, which may be of independent interest. These properties represent a new point in the tradeoffs among setup, complexity assumptions, proof size, and computational cost. We apply the Fiat-Shamir heuristic to this argument to produce a zero-knowledge succinct non-interactive argument of knowledge (zkSNARK) in the random oracle model, based on the discrete log assumption, which we call Hyrax. We implement Hyrax and evaluate it against five state-of-the-art baseline systems. Our evaluation shows that, even for modest problem sizes, Hyrax gives smaller proofs than all but the most computationally costly baseline, and that its prover and verifier are each faster than three of the five baselines.
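
    Spelled out, the communication claim above is, in my notation,

        |\pi| \;=\; O(d \cdot \log G) \;+\; O\!\left(\sqrt{|w|}\right)

    where |w| is the witness size. The square-root term reflects the multilinear commitment scheme, which (roughly) lays the witness out as a \sqrt{|w|} \times \sqrt{|w|} matrix and commits to it row by row, so an opening costs about one row rather than the whole witness.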

    Silph: A Framework for Scalable and Accurate Generation of Hybrid MPC Protocols

    Many applications in finance and healthcare need access to data from multiple organizations. While these organizations can benefit from computing on their joint datasets, they often cannot share data with each other due to regulatory constraints and business competition. One way mutually distrusting parties can collaborate without sharing their data in the clear is to use secure multiparty computation (MPC). However, MPC's performance presents a serious obstacle to adoption, as it is difficult to optimize for users who lack expertise in advanced cryptography. In this paper, we present Silph, a framework that automatically compiles a program written in a high-level language to an optimized, hybrid MPC protocol that mixes multiple MPC primitives securely and efficiently. Compared to prior works, our compilation speed is improved by up to 30,000×. On various database analytics and machine learning workloads, the MPC protocols generated by Silph match or outperform prior work by up to 3.6×.
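
    For readers unfamiliar with the primitives such a compiler mixes, the sketch below shows the simplest one: additive secret sharing over a prime field, under which additions are communication-free. The names and parameters are illustrative assumptions, not Silph's API; a hybrid protocol would route non-linear operations such as comparisons to a different primitive (e.g. garbled circuits) and insert share conversions between them.

        # Additive secret sharing over Z_p (illustrative sketch, not Silph's API).
        import secrets

        p = 2**61 - 1  # prime modulus

        def share(x, n=3):
            """Split x into n random shares that sum to x mod p."""
            shares = [secrets.randbelow(p) for _ in range(n - 1)]
            shares.append((x - sum(shares)) % p)
            return shares

        def reconstruct(shares):
            return sum(shares) % p

        # Addition is local: each party adds its own shares; no communication needed.
        a_shares, b_shares = share(20), share(22)
        sum_shares = [(a + b) % p for a, b in zip(a_shares, b_shares)]
        assert reconstruct(sum_shares) == 42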

    Fast and simple constant-time hashing to the BLS12-381 elliptic curve

    Pairing-friendly elliptic curves in the Barreto-Lynn-Scott family are seeing a resurgence in popularity because of the recent result of Kim and Barbulescu that improves attacks against other pairing-friendly curve families. One particular Barreto-Lynn-Scott curve, called BLS12-381, is the locus of significant development and deployment effort, especially in blockchain applications. This effort has sparked interest in using the BLS12-381 curve for BLS signatures, which requires hashing to one of the groups of the bilinear pairing defined by BLS12-381. While there is a substantial body of literature on the problem of hashing to elliptic curves, much of this work does not apply to Barreto-Lynn-Scott curves. Moreover, the work that does apply has the unfortunate property that fast implementations are complex, while simple implementations are slow. In this work, we address these issues. First, we show a straightforward way of adapting the “simplified SWU” map of Brier et al. to BLS12-381. Second, we describe optimizations to this map that both simplify its implementation and improve its performance; these optimizations may be of interest in other contexts. Third, we implement and evaluate. We find that our work yields constant-time hash functions that are simple to implement, yet perform within 9% of the fastest, non–constant-time alternatives, which require much more complex implementations.
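
    For reference, here is the simplified SWU map in its standard general form (my summary; the paper's contribution is adapting and optimizing it for BLS12-381, whose a = 0 means the map is usually applied to an isogenous curve and composed with the isogeny). For E: y^2 = g(x) = x^3 + ax + b with a, b \neq 0 and a fixed non-square Z, a field element u is mapped to

        x_1 = \frac{-b}{a}\left(1 + \frac{1}{Z^2 u^4 + Z u^2}\right), \qquad x_2 = Z u^2 x_1

    with output (x_1, \sqrt{g(x_1)}) if g(x_1) is a square and (x_2, \sqrt{g(x_2)}) otherwise. Since g(x_2) = Z^3 u^6 g(x_1) and Z is a non-square, exactly one branch succeeds away from a few exceptional inputs, and choosing the sign of the square root from u keeps the computation constant-time-friendly.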

    Compact Certificates of Collective Knowledge

    New Architectures for Radio-Frequency DC-DC Power Conversion

    This document proposes two new architectures for switched-mode dc–dc power conversion. The proposed architectures enable dramatic increases in switching frequency to be realized while preserving features critical in practice, including regulation of the output across a wide load range and high light-load efficiency. This is achieved in part by how the energy conversion and regulation functions are partitioned. The structure and control approach of the new architectures are described, along with representative implementation methods. The design and experimental evaluation of prototype systems with cells operating at 100 MHz are also described. It is anticipated that the proposed approaches and ones like them will allow substantial improvements in the size of switching power converters and, in some cases, will permit their integrated fabrication.
    Funding: MIT/Industry Consortium on Advanced Automotive Electrical/Electronic Components and Systems; Soderberg Chair of the Massachusetts Institute of Technology.